98 research outputs found

    Relaxed memory models: an operational approach

    Memory models define an interface between programs written in some language and their implementation, determining which behaviours the memory (and thus a program) is allowed to exhibit in a given model. A minimal guarantee memory models should provide to the programmer is that well-synchronized, that is, data-race-free code has standard semantics. Traditionally, memory models are defined axiomatically, by setting constraints on the order in which memory operations are allowed to occur, with the programming-language semantics entering only implicitly as the source of some of these constraints. In this work we propose a new approach to formalizing a memory model, in which the model itself is part of a weak operational semantics for a (possibly concurrent) programming language. We formalize in this way a model that allows write operations to the store to be buffered. This enables us to derive the ordering constraints from the weak semantics of programs, and to prove, at the programming-language level, that the weak semantics implements the usual interleaving semantics for data-race-free programs, and hence in particular that it implements the usual semantics for sequential code.
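
    The buffered-write model the abstract describes can be pictured with a small sketch. This is our own simplification, not the paper's formal semantics: each thread buffers its writes in a FIFO, a read consults the thread's own buffer before shared memory, and buffered writes drain to memory at any later point. The classic store-buffering litmus test then exhibits the weak outcome r1 = r2 = 0, which is impossible under interleaving semantics.

```python
class BufferedMemory:
    """Toy store with per-thread FIFO write buffers (illustrative only)."""

    def __init__(self):
        self.mem = {}       # globally visible memory
        self.buffers = {}   # thread id -> list of pending (addr, value) writes

    def write(self, tid, addr, val):
        # writes are buffered in program order, not performed immediately
        self.buffers.setdefault(tid, []).append((addr, val))

    def read(self, tid, addr):
        # read-own-write: the newest buffered value for addr wins
        for a, v in reversed(self.buffers.get(tid, [])):
            if a == addr:
                return v
        return self.mem.get(addr, 0)

    def flush(self, tid):
        # drain the thread's buffer to memory, in FIFO order
        for a, v in self.buffers.pop(tid, []):
            self.mem[a] = v

# Store-buffering litmus test: both threads write before either flushes.
m = BufferedMemory()
m.write(0, "x", 1)      # thread 0: x := 1
m.write(1, "y", 1)      # thread 1: y := 1
r1 = m.read(0, "y")     # thread 0: r1 := y  (write to y still buffered -> 0)
r2 = m.read(1, "x")     # thread 1: r2 := x  (write to x still buffered -> 0)
m.flush(0)
m.flush(1)
print(r1, r2)           # 0 0 : the relaxed outcome
```

    With eager flushing after every write, the same program can only produce the interleaving-semantics outcomes, which is the intuition behind the data-race-freedom guarantee.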

    Relaxed Operational Semantics of Concurrent Programming Languages

    We propose a novel, operational framework to formally describe the semantics of concurrent programs running within the context of a relaxed memory model. Our framework features a "temporary store" where the memory operations issued by the threads are recorded, in program order. A memory model then specifies the conditions under which a pending operation from this sequence is allowed to be globally performed, possibly out of order. The memory model also involves a "write grain," accounting for architectures where a thread may read a write that is not yet globally visible. Our formal model is supported by a software simulator, allowing us to run litmus tests in our semantics. Comment: In Proceedings EXPRESS/SOS 2012, arXiv:1208.244
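
    The "temporary store" idea can be sketched as follows. The names and the particular reordering rule here are our own simplification, not the paper's: operations are appended in program order, and a pending write may be globally performed out of order provided it does not overtake an earlier pending operation by the same thread on the same address.

```python
class TemporaryStore:
    """Toy temporary store: pending ops may be performed out of order."""

    def __init__(self):
        self.pending = []   # (tid, addr, value) recorded in program order
        self.memory = {}

    def issue(self, tid, addr, value):
        self.pending.append((tid, addr, value))

    def can_perform(self, i):
        # our sample condition: an op may not bypass an earlier pending op
        # by the same thread on the same address
        tid, addr, _ = self.pending[i]
        return all(not (t == tid and a == addr)
                   for t, a, _ in self.pending[:i])

    def perform(self, i):
        # globally perform pending op i, removing it from the sequence
        assert self.can_perform(i)
        _, addr, value = self.pending.pop(i)
        self.memory[addr] = value

ts = TemporaryStore()
ts.issue(0, "x", 1)
ts.issue(0, "y", 2)
ts.perform(1)            # y := 2 performed out of order (different address)
print(ts.memory)         # {'y': 2}
ts.perform(0)
print(ts.memory)         # {'y': 2, 'x': 1}
```

    A "write grain" could be layered on top by letting some threads read a pending entry before it is performed, which this sketch omits.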

    TriCheck: Memory Model Verification at the Trisection of Software, Hardware, and ISA

    Memory consistency models (MCMs), which govern inter-module interactions in a shared memory system, are a significant, yet often under-appreciated, aspect of system design. MCMs are defined at the various layers of the hardware-software stack, requiring thoroughly verified specifications, compilers, and implementations at the interfaces between layers. Current verification techniques evaluate segments of the system stack in isolation, such as proving compiler mappings from a high-level language (HLL) to an ISA or proving validity of a microarchitectural implementation of an ISA. This paper makes a case for full-stack MCM verification and provides a toolflow, TriCheck, capable of verifying that the HLL, compiler, ISA, and implementation collectively uphold MCM requirements. The work showcases TriCheck's ability to evaluate a proposed ISA MCM in order to ensure that each layer and each mapping is correct and complete. Specifically, we apply TriCheck to the open-source RISC-V ISA, seeking to verify accurate, efficient, and legal compilations from C11. We uncover under-specifications and potential inefficiencies in the current RISC-V ISA documentation and identify possible solutions for each. As an example, we find that a RISC-V-compliant microarchitecture allows 144 outcomes forbidden by C11 to be observed out of 1,701 litmus tests examined. Overall, this paper demonstrates the necessity of full-stack verification for detecting MCM-related bugs in the hardware-software stack. Comment: Proceedings of the Twenty-Second International Conference on Architectural Support for Programming Languages and Operating Systems
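
    To make "outcomes forbidden by C11" concrete, here is a tiny sequentially consistent litmus-test checker (our sketch, not TriCheck itself): it enumerates every interleaving of two threads and collects the final register values. For the message-passing test below, the outcome r1 = 1, r2 = 0 never appears, so any lower layer that permits it breaks the high-level model's guarantee.

```python
from itertools import combinations

def interleavings(t0, t1):
    """Yield every interleaving of the two op lists, preserving program order."""
    n = len(t0) + len(t1)
    for idx in combinations(range(n), len(t0)):
        slots, i0, i1 = set(idx), iter(t0), iter(t1)
        yield [next(i0) if k in slots else next(i1) for k in range(n)]

def run(ops):
    """Execute one interleaving against a fresh memory; return (r1, r2)."""
    mem, regs = {"x": 0, "y": 0}, {}
    for op in ops:
        if op[0] == "st":
            _, addr, val = op
            mem[addr] = val
        else:
            _, reg, addr = op
            regs[reg] = mem[addr]
    return regs["r1"], regs["r2"]

# Message-passing litmus test.
t0 = [("st", "x", 1), ("st", "y", 1)]        # producer: data, then flag
t1 = [("ld", "r1", "y"), ("ld", "r2", "x")]  # consumer: flag, then data
outcomes = {run(ops) for ops in interleavings(t0, t1)}
print(outcomes)          # (1, 0) is absent: seeing the flag but stale data
                         # is forbidden under sequential consistency
```

    TriCheck's contribution is checking that compiler mappings and microarchitectures jointly preserve such prohibitions; this sketch only shows what a single-layer "forbidden outcome" means.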

    Cooking the Books: Formalizing JMM Implementation Recipes

    The Java Memory Model (JMM) is intended to characterize the meaning of concurrent Java programs. Because of the model's complexity, however, its definition cannot be easily transplanted within an optimizing Java compiler, even though an important rationale for its design was to ensure Java compiler optimizations are not unduly hampered because of the language's concurrency features. In response, Lea's JSR-133 Cookbook for Compiler Writers, an informal guide to realizing the principles underlying the JMM on different (relaxed-memory) platforms, was developed. The goal of the cookbook is to give compiler writers a relatively simple, yet reasonably efficient, set of reordering-based recipes that satisfy JMM constraints. In this paper, we present the first formalization of the cookbook, providing a semantic basis upon which the relationship between the recipes defined by the cookbook and the guarantees enforced by the JMM can be rigorously established. Notably, one artifact of our investigation is that the rules defined by the cookbook for compiling Java onto Power are inconsistent with the requirements of the JMM, a surprising result, and one which justifies our belief in the need for formally provable definitions to reason about sophisticated (and racy) concurrency patterns in Java, and their implementation on modern-day relaxed-memory hardware. Our formalization enables simulation arguments between an architecture-independent intermediate representation of the kind suggested by Lea with machine abstractions for Power and x86. Moreover, we provide fixes for cookbook recipes that are inconsistent with the behaviors admitted by the target platform, and prove the correctness of these repairs.
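
    A cookbook-style recipe is essentially a per-platform table mapping language-level accesses to fence sequences. The table below is our own condensed, illustrative rendition (the fence choices are placeholders, not Lea's actual tables); the paper's point is that exactly such tables need formal justification against the JMM, since a plausible-looking Power recipe turned out to be unsound.

```python
# Illustrative recipe table: Java volatile access -> fences to emit.
# The entries are assumptions for the sake of the example, not a
# verified mapping; consult the actual cookbook for real recipes.
RECIPES = {
    "x86": {
        "volatile_load":  [],           # x86 loads already act acquire-like
        "volatile_store": ["mfence"],   # full fence after the store
    },
    "power": {
        "volatile_load":  ["sync_before", "isync_after"],   # placeholder
        "volatile_store": ["sync_before"],                  # placeholder
    },
}

def compile_access(target, access):
    """Return the fence sequence the recipe prescribes for this access."""
    return RECIPES[target][access]

print(compile_access("x86", "volatile_store"))   # ['mfence']
```

    Formalizing the cookbook amounts to proving, per target, that every row of such a table preserves the JMM's guarantees, which is where the Power inconsistency was found.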

    Invariant Safety for Distributed Applications

    We study a proof methodology for verifying the safety of data invariants of highly-available distributed applications that replicate state. The proof is (1) modular: one can reason about each individual operation separately, and (2) sequential: one can reason about a distributed application as if it were sequential. We automate the methodology and illustrate the use of the tool with a representative example. Comment: Workshop on Principles and Practice of Consistency for Distributed Data (PaPoC), Mar 2019, Dresden, Germany. https://novasys.di.fct.unl.pt/conferences/papoc19
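
    The modular-and-sequential proof style can be illustrated with a toy example (ours, not the paper's): for a replicated counter with the invariant that the balance stays non-negative, each operation is checked in isolation, as if the program were sequential, against pre-states satisfying the invariant.

```python
# Invariant: the replicated balance never goes negative.
def invariant(balance):
    return balance >= 0

def deposit(balance, n):
    return balance + n

def withdraw(balance, n):
    # locally guarded; the methodology asks whether this guard remains
    # strong enough once the operation is replicated and merged
    return balance - n if balance >= n else balance

# Modular, sequential check: each operation separately, over a sample
# of pre-states that satisfy the invariant (a real tool would use an
# SMT solver rather than enumeration).
for b in range(10):
    for n in range(10):
        assert invariant(deposit(b, n))
        assert invariant(withdraw(b, n))
print("both operations preserve the invariant sequentially")
```

    The value of the methodology is that this per-operation, sequential check, plus conditions on how concurrent effects merge, suffices for the distributed setting.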

    Consistency in 3D

    Comparisons of different consistency models often try to place them in a linear strong-to-weak order. However, this view is clearly inadequate, since it is well known, for instance, that Snapshot Isolation and Serialisability are incomparable. In the interest of a better understanding, we propose a new classification along three dimensions, related to: a total order of writes, a causal order of reads, and transactional composition of multiple operations. A model may be stronger than another on one dimension and weaker on another. We believe that this new classification scheme is both scientifically sound and has good explicative value. The current paper presents the three-dimensional design space intuitively.

    Ram pressure statistics for bent tail radio galaxies

    In this paper we use the MareNostrum Universe Simulation, a large-scale, hydrodynamic, non-radiative simulation, in combination with a simple abundance-matching approach to determine the ram pressure statistics for bent radio sources (BRSs). The abundance-matching approach allows us to determine the locations of all galaxies with stellar masses > 10^11 M_Sol in the simulation volume. Assuming that ram pressure exceeding a critical value causes bent morphology, we compute the ratio of all galaxies exceeding the ram pressure limit (RPEX galaxies) relative to all galaxies in our sample. According to our model, 50% of the RPEX galaxies at z = 0 are found in clusters with masses larger than 10^14.5 M_Sol; the other half resides in lower-mass clusters. Therefore, the appearance of bent-tail morphology alone does not put tight constraints on the host cluster mass. In low-mass clusters, M < 10^14 M_Sol, RPEX galaxies are confined to the central 500 kpc, whereas in clusters of > 10^15 M_Sol they can be found at distances up to 1.5 Mpc. Only clusters with masses > 10^15 M_Sol are likely to host more than one BRS. Both criteria may prove useful in the search for distant, high-mass clusters. Comment: 10 pages, 10 figures, Submitted to the Monthly Notices of the Royal Astronomical Society
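
    For context, ram pressure on a galaxy moving through the intracluster medium is conventionally P_ram = rho_ICM * v^2 (Gunn & Gott); the paper's criterion compares this against a critical value. A quick order-of-magnitude evaluation, with illustrative values of our own rather than the paper's:

```python
# Illustrative inputs (assumed, not taken from the paper):
RHO_ICM = 1e-27   # g / cm^3, a typical cluster-core gas density
V = 1e8           # cm / s, i.e. 1000 km/s, a typical cluster velocity

# Ram pressure in the standard Gunn & Gott form.
p_ram = RHO_ICM * V**2
print(f"P_ram = {p_ram:.1e} erg/cm^3")
```

    Since rho_ICM falls steeply with cluster-centric radius while orbital velocities scale with cluster mass, the same critical P_ram is reached only near the centre in low-mass clusters but out to larger radii in massive ones, matching the 500 kpc vs 1.5 Mpc contrast above.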

    Ensuring referential integrity under causal consistency

    Referential integrity (RI) is an important correctness property of a shared, distributed object storage system. It is sometimes thought that enforcing RI requires a strong form of consistency. In this paper, we argue that causal consistency suffices to maintain RI. We support this argument with pseudocode for a reference CRDT data type that maintains RI under causal consistency. QuickCheck testing has found no errors in the model.
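
    The intuition can be sketched with a hypothetical store (our illustration, not the paper's CRDT): creating a reference causally depends on the creation of its target, so under causal delivery the target's creation is always applied first and lookups never dangle; removal is refused while a reference remains.

```python
class RIStore:
    """Toy object store that maintains referential integrity (illustrative)."""

    def __init__(self):
        self.objects = set()
        self.refs = {}        # reference name -> target object

    def create(self, obj):
        self.objects.add(obj)

    def add_ref(self, name, target):
        # under causal consistency, create(target) is delivered before
        # any add_ref that observed it, so this check cannot fail remotely
        assert target in self.objects, "dangling reference"
        self.refs[name] = target

    def remove(self, obj):
        # refuse removal while any reference still points at obj
        if any(t == obj for t in self.refs.values()):
            return False
        self.objects.discard(obj)
        return True

s = RIStore()
s.create("doc1")
s.add_ref("link", "doc1")
assert s.remove("doc1") is False   # still referenced: removal refused
del s.refs["link"]
assert s.remove("doc1") is True    # unreferenced: removal succeeds
```

    A real CRDT would resolve a concurrent add_ref/remove pair with a deterministic merge rule (e.g. add-wins) rather than a local refusal; the sketch only shows why causal delivery rules out dangling reads.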

    Grupo de Danças Tradicionais Gaúchas Tradição Cultura Herança TCHE UFRGS: a decade of history

    Proceedings of the 35th Seminário de Extensão Universitária da Região Sul - Thematic area: Culture. University extension plays a fundamental role in the relationship between society and the university, sustaining Brazilian public higher education. On completing a decade of existence, the extension project Grupo de Danças Tradicionais Gaúchas Tradição Cultura Herança TCHE/UFRGS reaffirms this social commitment. The goal of this work is to survey, through an experience report, the group's activity representing UFRGS over this period. In these 10 years the Grupo TCHE UFRGS has represented UFRGS at regional, national, and international events, earning recognition for its respectful treatment of its history and culture, and building knowledge from an understanding of the social subject, protagonist of their own actions and history, as the core of the university's institutional practices. The group's trajectory is recorded in several media, such as the programme "Conhecendo a UFRGS: Grupo TCHE", the commemorative book "10 anos de causos & Histórias do Grupo TCHE UFRGS", the DVDs documenting its performances, and the iconographic record of its many presentations. By taking part in this project, which aims to promote and disseminate the intangible heritage of traditional gaúcho dances, the members have the opportunity, through their extension work, to reflect on their importance as propagators of these practices. Beyond representing the university, the project fosters an understanding of reality, prompting the transformative thinking that seeks solutions to change it, creating opportunities to make a difference in the lives of those involved and affirming the premises of extension work at UFRGS.